
    A user-dependent approach to the perception of high-level semantics of music


    Prediction of musical affect using a combination of acoustic structural cues

    This study explores whether musical affect attribution can be predicted by a linear combination of acoustical structural cues. To that aim, a database of sixty musical audio excerpts was compiled and analyzed at three levels: judgments of affective content by subjects; judgments of structural content by musicological experts (i.e., "manual structural cues"); and extraction of structural content by an auditory-based computer algorithm (i.e., "acoustical structural cues"). In Study I, an affect space was constructed with Valence (gay-sad), Activity (tender-bold) and Interest (exciting-boring) as the main dimensions, using the responses of a hundred subjects. In Study II, manual and acoustical structural cues were analyzed and compared. Manual structural cues such as loudness and articulation could be accounted for in terms of a combination of acoustical structural cues. In Study III, the subjective responses of eight individual subjects were analyzed using the affect space obtained in Study I, and modeled in terms of the structural cues obtained in Study II, using linear regression modeling. This worked better for the Activity dimension than for the Valence dimension, while the Interest dimension could not be accounted for. Overall, manual structural cues worked better than acoustical structural cues. In a final assessment study, a selected set of acoustical structural cues was used for building prediction models. The results indicate that musical affect attribution can partly be predicted using a combination of acoustical structural cues. Future research may focus on non-linear approaches, elaboration of the dataset and subject pool, and refinement of acoustical structural cue extraction.
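    The modeling step described above — predicting an affect rating from a linear combination of structural cues — can be sketched as an ordinary least-squares fit. This is a minimal illustration with simulated data, not the paper's actual cues or ratings; the cue count, weights and noise level are invented for demonstration.

    ```python
    import numpy as np

    # Simulated stand-in for the paper's data: 60 excerpts, each described
    # by 3 hypothetical structural cues (e.g. loudness, tempo, articulation),
    # and one affect rating per excerpt generated from assumed weights.
    rng = np.random.default_rng(0)
    cues = rng.random((60, 3))                      # 60 excerpts x 3 cues
    true_w = np.array([0.8, -0.3, 0.5])             # invented weights
    ratings = cues @ true_w + 0.05 * rng.standard_normal(60)

    # Fit the linear combination by least squares (with an intercept).
    X = np.column_stack([np.ones(60), cues])
    w, *_ = np.linalg.lstsq(X, ratings, rcond=None)

    # Correlation between predicted and observed ratings as a fit measure.
    predicted = X @ w
    r = np.corrcoef(predicted, ratings)[0, 1]
    ```

    With low noise the fit is nearly perfect; on real affect judgments the study reports only partial predictability, especially for Valence and Interest.
    
    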

    Correlation of gestural musical audio cues and perceived expressive qualities

    An empirical study on the perceived semantic quality of musical content and its relationship with perceived structural audio features is presented. In a first study, subjects judged a variety of musical excerpts using adjectives describing different emotive/affective/expressive qualities of music. Factor analysis revealed three dimensions, related to valence, activity and interest. In a second study, the semantic judgements were compared with automated and manual structural descriptions of the musical audio signal. Applications of the results in the domains of audio mining, interactive multimedia and brain research are straightforward.
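    The dimension-extraction step above can be illustrated with an eigenvalue analysis of the rating correlation matrix (a principal-component sketch standing in for the paper's factor analysis). All data here are simulated: the numbers of excerpts, adjectives and underlying factors are assumptions for demonstration only.

    ```python
    import numpy as np

    # Simulate adjective ratings driven by a small number of latent
    # affect dimensions, as in the study's valence/activity/interest result.
    rng = np.random.default_rng(1)
    n_excerpts, n_adjectives, n_factors = 60, 9, 3
    loadings = rng.standard_normal((n_factors, n_adjectives))
    scores = rng.standard_normal((n_excerpts, n_factors))
    ratings = scores @ loadings + 0.1 * rng.standard_normal((n_excerpts, n_adjectives))

    # Eigendecomposition of the correlation matrix: a sharp drop after the
    # first few eigenvalues suggests how many dimensions underlie the ratings.
    corr = np.corrcoef(ratings, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    explained = eigvals[:n_factors].sum() / eigvals.sum()
    ```

    In this simulation the first three eigenvalues account for most of the variance, mirroring the three-dimensional structure the factor analysis recovered.
    
    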